Modeling Car Insurance Claim Outcomes

A DataCamp project about building a logistic regression (GLM) model to predict whether a customer will make an insurance claim.
R
DataCamp
tidyverse
Published

March 25, 2024

Insurance companies invest a lot of time and money into optimizing their pricing and accurately estimating the likelihood that customers will make a claim. In many countries it is a legal requirement to have car insurance in order to drive a vehicle on public roads, so the market is very large!

Knowing all of this, On the Road car insurance has requested your services in building a model to predict whether a customer will make a claim on their insurance during the policy period. As they have very little expertise and infrastructure for deploying and monitoring machine learning models, they’ve asked you to use simple Logistic Regression, identifying the single feature that results in the best-performing model, as measured by accuracy.

They have supplied you with their customer data as a CSV file called car_insurance.csv, along with a table (below) detailing the column names and descriptions.

1 The dataset description

| Column | Description |
|---|---|
| id | Unique client identifier |
| age | Client’s age: 0 = 16-25, 1 = 26-39, 2 = 40-64, 3 = 65+ |
| gender | Client’s gender: 0 = Female, 1 = Male |
| driving_experience | Years the client has been driving: 0 = 0-9, 1 = 10-19, 2 = 20-29, 3 = 30+ |
| education | Client’s level of education: 0 = No education, 1 = High school, 2 = University |
| income | Client’s income level: 0 = Poverty, 1 = Working class, 2 = Middle class, 3 = Upper class |
| credit_score | Client’s credit score (between zero and one) |
| vehicle_ownership | Client’s vehicle ownership status: 0 = Does not own their vehicle (paying off finance), 1 = Owns their vehicle |
| vehicle_year | Year of vehicle registration: 0 = Before 2015, 1 = 2015 or later |
| married | Client’s marital status: 0 = Not married, 1 = Married |
| children | Client’s number of children |
| postal_code | Client’s postal code |
| annual_mileage | Number of miles driven by the client each year |
| vehicle_type | Type of car: 0 = Sedan, 1 = Sports car |
| speeding_violations | Total number of speeding violations received by the client |
| duis | Number of times the client has been caught driving under the influence of alcohol |
| past_accidents | Total number of previous accidents the client has been involved in |
| outcome | Whether the client made a claim on their car insurance (response variable): 0 = No claim, 1 = Made a claim |

2 Objectives

The Head of Data at On the Road car insurance has asked for your support as they venture into the world of machine learning! They would like you to start by investigating their customer data and cleaning it in preparation for modeling. Once that is complete, they would like you to tell them which feature produces the best accuracy for predicting whether a customer will make a car insurance claim. Specifically, they have set the following tasks:

  • Investigate and clean the data so that there are no missing values, and remove the “id” column.
  • Find the feature with the best predictive performance for a car insurance claim (“outcome”) by creating simple Logistic Regression models (each with a single feature) and assessing their accuracy.
  • Create a data frame called best_feature_df, containing columns named “best_feature” and “best_accuracy” with the name of the feature with the highest accuracy, and the respective accuracy score.

3 Libraries used
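
The analysis relies on the tidyverse (for read_csv() and the dplyr verbs) and glue (for building model formulas from strings); a minimal setup, assuming both packages are installed:

library(tidyverse) # read_csv(), dplyr verbs, tibble()
library(glue)      # string interpolation, used to build model formulas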

4 Cleaning the data

  • Reading and inspecting the data: applying summary() to is.na(clean_data) shows that credit_score and annual_mileage contain missing values.
data <- read_csv("car_insurance.csv", show_col_types = FALSE)
head(data)
# Drop the id column, which carries no predictive information
clean_data <- data %>% select(-id)
# Count the missing values in each column
summary(is.na(clean_data))
    age            gender           race         driving_experience
 Mode :logical   Mode :logical   Mode :logical   Mode :logical     
 FALSE:10000     FALSE:10000     FALSE:10000     FALSE:10000       
                                                                   
 education         income        credit_score    vehicle_ownership
 Mode :logical   Mode :logical   Mode :logical   Mode :logical    
 FALSE:10000     FALSE:10000     FALSE:9018      FALSE:10000      
                                 TRUE :982                        
 vehicle_year     married         children       postal_code    
 Mode :logical   Mode :logical   Mode :logical   Mode :logical  
 FALSE:10000     FALSE:10000     FALSE:10000     FALSE:10000    
                                                                
 annual_mileage  vehicle_type    speeding_violations    duis        
 Mode :logical   Mode :logical   Mode :logical       Mode :logical  
 FALSE:9043      FALSE:10000     FALSE:10000         FALSE:10000    
 TRUE :957                                                          
 past_accidents   outcome       
 Mode :logical   Mode :logical  
 FALSE:10000     FALSE:10000    
                                
  • Replacing missing data: imputing missing values with each column’s mean is preferable to deleting rows, since we can’t simply discard approximately 10% of the dataset.
# Impute missing values with the column mean
clean_data$annual_mileage[is.na(clean_data$annual_mileage)] <- mean(clean_data$annual_mileage, na.rm = TRUE)
clean_data$credit_score[is.na(clean_data$credit_score)] <- mean(clean_data$credit_score, na.rm = TRUE)
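
The same imputation can also be written in a tidyverse style; a minimal sketch using tidyr’s replace_na(), equivalent to the two base-R assignments above:

# Impute both columns in a single pipeline (same result as above)
clean_data <- clean_data %>%
  mutate(across(c(annual_mileage, credit_score),
                ~ replace_na(.x, mean(.x, na.rm = TRUE))))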

5 Preparing the Logistic Regression models

  • Extracting frequently used values to keep the modeling code concise.
# Response vector and the names of the candidate features
outcome <- clean_data$outcome
features <- names(clean_data %>% select(-outcome))
# Tibble that will collect one accuracy score per feature
features_scores <- tibble(features)
  • Using the glue() function to build each model formula from the column name (a string); for example, when col is "age", glue("outcome ~ {col}") evaluates to "outcome ~ age". This also makes it easy to attach each feature’s accuracy score to the features_scores tibble.
for (col in features) {
  # Fit a single-feature logistic regression, e.g. outcome ~ age
  model <- glm(as.formula(glue("outcome ~ {col}")), data = clean_data, family = "binomial")
  # Round the fitted probabilities at a 0.5 threshold to get class predictions
  predictions <- round(fitted(model))
  # Accuracy: the share of predictions that match the observed outcomes
  accuracy_score <- length(which(predictions == outcome)) / length(outcome)
  features_scores[which(features_scores$features == col), "accuracy_score"] <- accuracy_score
}
  • The results are in.
head(features_scores %>% arrange(desc(accuracy_score)))

6 Checking and extracting the results

  • The feature with the best predictive performance for a car insurance claim is driving_experience, with an accuracy of 0.7771.
# Pick the feature with the highest accuracy and store it with its score
best_feature <- features_scores$features[which.max(features_scores$accuracy_score)]
best_accuracy <- max(features_scores$accuracy_score)
best_feature_df <- data.frame(best_feature, best_accuracy)
best_feature_df
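
As a quick sanity check, the winning model can be refitted on its own and its predictions cross-tabulated against the observed outcomes; a minimal sketch, using the same 0.5 threshold as the loop above:

# Refit the best single-feature model and inspect its confusion matrix
best_model <- glm(outcome ~ driving_experience, data = clean_data, family = "binomial")
table(predicted = round(fitted(best_model)), actual = outcome)
mean(round(fitted(best_model)) == outcome) # should equal best_accuracy (0.7771)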
Using correlation as an alternative

An initial test for discovering which features are most influential is to compute each feature’s correlation with the outcome column (we will compare this against the results from the models). It reveals that driving_experience, age, and income are negatively correlated with outcome (higher values are associated with not making a claim), while annual_mileage and, surprisingly, gender are positively correlated (higher values are associated with making a claim).

# as.numeric() flattens the single-column matrix returned by cor()
cor_table <- tibble(features,
                    cor = as.numeric(cor(subset(clean_data, select = -outcome), outcome)))
head(cor_table)

Comparing the two sets of results reveals that, although correlation correctly flags the five most strongly correlated features as the most accurate, it was not a reliable way to rank the remaining features.

# Join the correlations with the accuracy scores to compare the two rankings
full_table <- cor_table %>%
  inner_join(features_scores, by = "features") %>%
  arrange(desc(accuracy_score))
full_table